
Journal of Computational Neuroscience

Springer Science and Business Media LLC

Preprints posted in the last 30 days, ranked by how well they match Journal of Computational Neuroscience's content profile, based on 23 papers previously published here. The average preprint has a 0.02% match score for this journal, so anything above that is already an above-average fit.

1
Phase resetting of in-phase synchronized Hodgkin-Huxley dynamics under voltage perturbation reveals reduced null space

Gupta, R.; Karmeshu; Singh, R. K. B.

2026-03-24 neuroscience 10.64898/2026.03.21.713085 medRxiv
Top 0.1%
17.1%

Voltage perturbations to a repetitively firing Hodgkin-Huxley (HH) model of neuronal spiking in the bistable regime, with coexisting limit cycle and stable steady node, can either lead to phase resetting of the spikes or to collapse to the stable steady state. The latter describes a non-firing hyperpolarized quiescent state of the neuron despite the presence of constant external current. Using the asymptotic phase response curve (PRC), the impact of voltage perturbations on a repetitively firing HH model is studied here while it is diffusively coupled to another HH model under identical external stimulation. It is observed that the pre-perturbation state of synchronization and the coupling strength critically determine the PRC response of the perturbed HH dynamics. Higher coupling strengths of perfectly in-phase (anti-phase) synchronized HH models shrink (expand) the combinatorial space of perturbation strengths and oscillation phases causing collapse to the quiescent state. This indicates a reduced (enlarged) basin of attraction, viz. the null space, associated with the steady state in the HH phase space. The findings have important implications for the spiking dynamics of diverse interneurons, as well as special cases of pyramidal neurons, coupled through electrical synapses via gap junctions, and suggest a role for gap junction plasticity in tuning vulnerability to the quiescent state in the presence of biological noise and spikelets.
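The phase-resetting measurement described in this abstract can be sketched numerically. The snippet below is an illustrative stand-in, not the paper's model: it uses a FitzHugh-Nagumo oscillator instead of the full Hodgkin-Huxley equations, with an invented kick strength and threshold, to show how a PRC is estimated by perturbing the voltage at different phases and measuring the shift of the next spike.

```python
import numpy as np

# Illustrative PRC sketch -- a FitzHugh-Nagumo oscillator stands in for
# the full Hodgkin-Huxley model; all parameters here are invented.

def step(v, w, dt=0.01, I=0.5):
    # one Euler step of the FitzHugh-Nagumo equations
    dv = v - v ** 3 / 3.0 - w + I
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    return v + dt * dv, w + dt * dw

def next_spike_time(v, w, dt=0.01, thresh=1.0, t_max=200.0,
                    kick=0.0, t_kick=None):
    # integrate until v crosses thresh upward; optionally perturb v once
    t, prev = 0.0, v
    kicked = t_kick is None
    while t < t_max:
        v, w = step(v, w, dt)
        t += dt
        if not kicked and t >= t_kick:
            v += kick
            kicked = True
        if prev < thresh <= v:
            return t, v, w
        prev = v
    raise RuntimeError("no spike before t_max")

# settle onto the limit cycle, then align phase 0 to a spike
v, w = -1.0, 0.0
for _ in range(20000):
    v, w = step(v, w)
_, v0, w0 = next_spike_time(v, w)

T, _, _ = next_spike_time(v0, w0)            # unperturbed period
phases = np.linspace(0.05, 0.95, 10)
prc = [(T - next_spike_time(v0, w0, kick=0.3, t_kick=ph * T)[0]) / T
       for ph in phases]                      # >0 means the spike advanced
```

Because the FitzHugh-Nagumo fixed point is unstable at this drive, this toy cannot collapse to quiescence; the paper's bistable HH regime is what adds the null-space behaviour.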

2
Functional distinction between ionic and electric ephaptic effects on neuronal firing dynamics

Hauge, E.; Saetra, M. J.; Einevoll, G.; Halnes, G.

2026-03-30 neuroscience 10.64898/2026.03.26.714388 medRxiv
Top 0.1%
10.5%

Neuronal activity alters extracellular ion concentrations and electric potentials. Ephaptic effects refer to the feedback influence that these extracellular changes can have on neuronal activity. While electric ephaptic effects occur on a fast timescale due to extracellular potential perturbations, ionic ephaptic effects are driven by slower, accumulative changes in ion concentrations. Among the previous computational studies of ephaptic effects, the vast majority have focused exclusively on electric effects, while ionic ephaptic effects have largely been neglected. In this work, we present an electrodiffusive computational framework consisting of two-compartment neurons that interact via a shared extracellular space. By accounting for both electric potentials and ion-concentration dynamics in a self-consistent manner, our framework enables us to explore the relative roles of electric and ionic ephaptic effects. Through numerical experiments, we demonstrate that ionic and electric ephaptic interactions play very different roles. While ionic ephaptic interactions increase population firing rates, electric ephaptic interactions primarily drive subtle shifts in spike timing. Furthermore, we show that these spike shifts cause the phase difference (the distance in spike times between a small collection of neurons) to converge to a stable, unique phase difference, which we coin the ephaptic intrinsic phase preference.

Author summary: Neurons predominantly communicate through synapses: specialized contact points where a brief electrical signal, known as a spike or action potential, in one neuron influences another. Neurons generate these spikes by exchanging ions with the surrounding extracellular space. This way, spiking neurons alter extracellular ion concentrations and electric potentials. Since neurons are sensitive to such changes in their environment, they can also influence one another indirectly through the shared extracellular medium.
This form of non-synaptic interaction is known as ephaptic coupling. Most computational models of neuronal activity neglect ephaptic interactions, and those that include them typically consider only electric effects while ignoring ionic contributions. As a result, the relative roles of electric and ionic ephaptic effects remain poorly understood. Here, we introduce a computational framework that accounts for both mechanisms in a self-consistent way. Our results show a functional distinction: ionic ephaptic effects act slowly, regulating population firing rates, whereas electric ephaptic effects act on millisecond timescales and subtly shift spike timing. These shifts cause spike-time differences between neurons to converge to a stable value, a phenomenon we call ephaptic intrinsic phase preference.

3
How to train your neuron: Developing a detailed, up-to-date, multipurpose model of hippocampal CA1 pyramidal cells

Tar, L.; Saray, S.; Mohacsi, M.; Freund, T. F.; Kali, S.

2026-03-20 neuroscience 10.64898/2026.03.19.712861 medRxiv
Top 0.1%
7.2%

Anatomically and biophysically detailed models of neurons have been widely used to study information processing in these cells. Most studies focused on understanding specific phenomena, while more general models that aim to capture various cellular processes simultaneously remain rare even though such models are required to predict neuronal behavior under more complex, natural conditions. In this study, we aimed to develop a detailed, data-driven, general-purpose biophysical model of hippocampal CA1 pyramidal neurons. We leveraged extensive morphological, biophysical and physiological data available for this cell type, and established a systematic workflow for model construction and validation that relies on our recently developed software tools. The model is based on a high-quality morphological reconstruction and includes a diverse curated set of ion channel models. After incorporating the available constraints on the distribution of ion channels, the remaining free parameters were optimized using the Neuroptimus tool to fit a variety of electrophysiological features extracted from somatic whole-cell recordings. Validation using HippoUnit confirmed the model's ability to replicate key electrophysiological features, including somatic voltage responses to current input, the attenuation of synaptic potentials and backpropagating action potentials, and nonlinear synaptic integration in oblique dendrites. Our model also included active dendritic spines, modeled either explicitly or by merging their biophysical mechanisms into those of the parent dendrite. We found that many aspects of neuronal behavior were unaffected by the level of detail in modeling spines, but modeling nonlinear synaptic integration accurately required the explicit modeling of spines.
Our data-driven model of CA1 pyramidal cells matching diverse experimental constraints is a general tool for the investigation of the activity and plasticity of these cells and can also be a reliable component of detailed models of the hippocampal network. Our systematic approach to building and validating general-purpose models should apply to other cell types as well.

Author Summary: The brain processes information through the activity of billions of individual neurons. To understand how these cells work, scientists build detailed computer models that reproduce their electrical behavior. These models make it possible to explore situations that are difficult or impossible to test experimentally. However, many existing neuron models were designed to explain only a few specific phenomena, which limits their usefulness in more complex settings. In this study, we developed a comprehensive computer model of a hippocampal CA1 pyramidal neuron, a cell type that plays a central role in learning and memory. We built the model using extensive experimental data and applied automated methods to ensure that it reproduces a broad range of observed neuronal behaviors. We also examined how small structures called dendritic spines--tiny protrusions where most synaptic communication occurs--affect how neurons combine incoming signals. We found that even simplified models without individual spines can capture many aspects of neuronal activity, but understanding more complex forms of signal integration requires modeling spines explicitly. Our work also supports the development of more realistic simulations of brain circuits.

4
Functionally convergent but parametrically distinct solutions: Robust degeneracy in a population of computational models of early-birth rat CA1 pyramidal neurons

Tomko, M.; Lupascu, C. A.; Filipova, A.; Jedlicka, P.; Lacinova, L.; Migliore, M.

2026-04-01 neuroscience 10.64898/2026.03.30.715207 medRxiv
Top 0.1%
6.9%

Background: Flexibility and robustness of neuronal function are closely linked to degeneracy, the ability of distinct structural or parametric configurations to produce similar functional outcomes. At the cellular level, this often manifests as ion-channel degeneracy, in which multiple combinations of intrinsic conductances yield comparable electrophysiological phenotypes.

Methodology: We used a population-based, data-driven modelling framework to generate large ensembles of biophysically detailed CA1 pyramidal neuron models constrained by somatic electrophysiological features extracted from patch-clamp recordings in acute slices from early-birth rats. Ten reconstructed morphologies were incorporated, and model populations were analyzed using parameter correlation analysis, principal component analysis, and generalization tests to assess robustness, degeneracy, and morphology dependence of intrinsic properties.

Conclusions: Across the model population, similar somatic firing behaviours emerged from widely different combinations of intrinsic parameters, demonstrating robust two-level ion channel degeneracy both within and across morphologies. Each morphology occupied a distinct region of parameter space, indicating morphology-specific compensatory effects, while weak pairwise parameter correlations suggested distributed compensation rather than tight parameter dependencies. Even with a fixed morphology, multiple parameter subspaces supported comparable electrophysiological phenotypes. Generalization across morphologies was structure-dependent and non-reciprocal, with successful parameter similarity occurring preferentially between structurally similar neurons. Interestingly, to accurately simulate spike-frequency adaptation, it was important to retain some kinetic properties of the ion channel models as free parameters during optimization.
Together, these findings show that dendrite morphology shapes the valid parameter space, and similar electrophysiology of CA1 pyramidal neurons arises from the interplay between structural variability and ion-channel diversity. This work highlights the importance of population-based modelling for capturing biological variability, provides insights into how neuronal robustness might be maintained despite substantial heterogeneity, and offers a scalable pipeline for generating biophysically realistic CA1 neuron populations for use in network simulations.

Author summary: Neurons must reliably process information even though their internal components, such as ion channels and cellular shape, can vary widely from cell to cell. How stable behaviour emerges from such variability is a fundamental question in neuroscience. In this study, we explored this problem using detailed computer models of early-birth rat hippocampal CA1 pyramidal neurons, a cell type that plays a central role in learning and memory. Instead of building a single "average" neuron model, we created large populations of models that all reproduced key experimental recordings but differed in their internal parameters. We found that neurons with different shapes and different combinations of ion channels could nevertheless generate similar electrical activity. This phenomenon, known as ion channel degeneracy, allows neurons to remain functional despite biological variability or perturbations. Our results show that neuronal shape strongly influences which parameter combinations are viable, but that multiple solutions exist even for the same morphology. The population of models we provide offers a resource for future studies of early-birth CA1 pyramidal cell function and dysfunction.

5
Postsynaptic integration of excitatory and inhibitory signals based on an adaptive firing threshold

Gambrell, O.; Singh, A.

2026-03-26 neuroscience 10.64898/2026.03.26.714497 medRxiv
Top 0.1%
4.9%

A key component of interneuronal communication is the modulation of postsynaptic firing frequencies by stochastic transmitter release from presynaptic neurons. The time interval between successive postsynaptic firings is called the inter-spike interval (ISI), and understanding its statistics is integral to neural information processing. We start with a model of an excitatory chemical synapse with postsynaptic neuron firing governed by a classical integrate-and-fire model. Using a first-passage time framework, we derive exact analytical results for the ISI statistical moments, revealing parameter regimes driving precision in postsynaptic action potential timing. Next, we extend this analysis to include both an excitatory and an inhibitory presynaptic connection onto the same postsynaptic neuron. We consider both a fixed postsynaptic-firing threshold and a threshold that adapts based on the postsynaptic membrane potential history. Our analysis shows that the latter adaptive threshold can result in scenarios where increasing the inhibitory input frequency increases the postsynaptic firing frequency. Moreover, we characterize parameter regimes where ISI noise is hypo-exponential or hyper-exponential based on its coefficient of variation being less than or higher than one, respectively.
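The first-passage setup behind these results can be illustrated with a quick simulation (a sketch with invented parameters, not the authors' analytical model): a leaky integrate-and-fire neuron receives Poisson excitatory input, and the ISI coefficient of variation distinguishes the hypo-exponential regime (CV below one) from the hyper-exponential one (CV above one).

```python
import numpy as np

# Sketch of ISI statistics for a Poisson-driven leaky integrate-and-fire
# neuron; all parameters are invented for illustration.

rng = np.random.default_rng(0)

def simulate_isis(rate_exc=1200.0, w_exc=0.5, tau=20.0, v_th=10.0,
                  dt=0.1, t_max=20000.0):
    # rate in Hz, times in ms, voltage in arbitrary units
    v, t_last, isis = 0.0, 0.0, []
    p_exc = rate_exc * dt / 1000.0       # input-spike probability per step
    for i in range(int(t_max / dt)):
        v += dt * (-v / tau)             # leak
        if rng.random() < p_exc:
            v += w_exc                   # excitatory kick
        if v >= v_th:                    # threshold: record ISI, reset
            t = i * dt
            isis.append(t - t_last)
            t_last, v = t, 0.0
    return np.array(isis)

isis = simulate_isis()
cv = isis.std() / isis.mean()
# many small kicks summing toward threshold give CV well below 1
# (the hypo-exponential regime of the abstract)
```

Lowering the input rate into the fluctuation-driven regime, or adding the inhibitory stream the paper analyzes, pushes the CV upward.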

6
A neurocomputational model of observation-based decision making with a focus on trust

Hassanejad Nazir, A.; Hellgren Kotaleski, J.; Liljenström, H.

2026-03-26 neuroscience 10.64898/2026.03.24.713845 medRxiv
Top 0.1%
2.8%

As social beings, humans make decisions partly based on social interaction. Observing the behavior of others can lead to learning from and about them, potentially increasing trust and prompting trust-based behavioral changes. Observation-based decision making involves different neural structures. The orbitofrontal cortex (OFC) and lateral prefrontal cortex (LPFC) are known as neural structures mainly involved in processing emotional and cognitive decision values, respectively, while the anterior cingulate cortex (ACC) plays a pivotal role as a social hub, integrating the afferent expectancy signals from OFC and LPFC. This paper presents a neurocomputational model of the interplay between observational learning and trust, as well as their role in individual decision-making. Our model elucidates and predicts the emotional and rational behavioral changes of an individual influenced by observing the action-outcome association of an alleged expert. We have modeled the neurodynamics of three cortical structures (OFC, LPFC, and ACC) and their interactions, where the neural oscillatory properties, modeled with Dynamic Bayesian Probability, represent the observer's attitude towards the expert and the decision options. As an example of an everyday behavioral situation related to climate change, we use the choice of transportation between home and work. The EEG-like simulation outputs from our model represent the presumed brain activity of an individual making such a choice, assuming the decision-maker is exposed to social information.

7
The transfer function as a tool to reduce morphological models into point-neuron models

Daou, M.; Jovanic, T.; Destexhe, A.

2026-03-24 neuroscience 10.64898/2026.03.20.713213 medRxiv
Top 0.2%
2.2%

Building a simple model that precisely and functionally characterizes a neuron, and selecting the most concise and computationally efficient such model, is a challenging and important task. However, this type of work has only been done for subthreshold properties of neurons. Here, we take a different perspective and suggest a method to obtain point-neuron models from morphologically-detailed models with dendrites. To do this, we focus on the functional characterization of the neuron response under in vivo conditions, and compute the transfer function of the detailed model. The parameters of this transfer function, in terms of mean voltage, voltage standard deviation and correlation time, can be used to compute the "best" point-neuron model that generates a transfer function very close to that of the morphologically-detailed model. We illustrate this approach for two very different neuronal morphologies, one from Drosophila larvae and one from mammals. In conclusion, this approach provides a tool to generate point-neuron models from detailed models, based on a functional characterization of the neuron response.

Significance Statement: This study provides a new computational method to reduce morphological models into point-neuron models. To do so, we calculate the transfer function parameters, i.e., the voltage standard deviation, the mean voltage and the correlation time, of the morphological model and fit a point-neuron model to these data. Here, we successfully apply this approach for two very different neuron morphologies, a Drosophila neuron and a rat motoneuron.
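The transfer-function characterization at the heart of this approach can be sketched for the simplest possible case. The snippet below (invented parameters; the correlation-time axis the paper uses is omitted for brevity) samples the output firing rate of a leaky integrate-and-fire point neuron as a function of the mean and standard deviation of a fluctuating input drive.

```python
import numpy as np

# Sketch of a transfer-function characterization for a leaky
# integrate-and-fire point neuron (invented parameters; this is a
# stand-in for the detailed-model characterization in the paper).

rng = np.random.default_rng(3)

def firing_rate(mu, sigma, tau=20.0, v_th=10.0, v_reset=0.0,
                dt=0.1, t_max=10000.0):
    # Ornstein-Uhlenbeck-like membrane voltage with mean drive mu and
    # fluctuation scale sigma; returns the output rate in Hz
    v, n_spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt * (mu - v) / tau + sigma * np.sqrt(dt / tau) * rng.normal()
        if v >= v_th:
            v = v_reset
            n_spikes += 1
    return 1000.0 * n_spikes / t_max

# sample the transfer function on a small (mu, sigma) grid
tf = {(mu, s): firing_rate(mu, s) for mu in (8.0, 12.0) for s in (2.0, 5.0)}
# rate rises with mu; for subthreshold mu it also rises with sigma
```

Fitting a point-neuron model then amounts to matching such a sampled surface between the reduced and the morphologically-detailed model.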

8
Origin and functional impact of early nonlinearities in primate retina

Raval, V.; Oaks-Leaf, R.; Chen, Q.; Rieke, F.

2026-03-23 neuroscience 10.64898/2026.03.19.713068 medRxiv
Top 0.2%
2.1%

Receptive fields provide a concise description of the stimulus selectivity of visual neurons. But this stimulus selectivity is neither static nor linear, and these nonlinear effects are not well captured by standard linear or pseudo-linear receptive field models. At the same time, receptive field models incorporating nonlinear effects are largely empirical, and are not easily interpreted in terms of underlying cellular and synaptic mechanisms. Here we show that two nonlinear mechanisms in the primate outer retina shape neural responses and that these contribute significantly to responses to natural stimuli and to the retinal output signals. Incorporating these outer retinal nonlinearities into models for visual function will improve our ability to identify the mechanistic origin of specific features of downstream visual responses.

9
Synchronization properties in C. elegans: Relating behavioral circuits to structural and functional neuronal connectivity

Sar, G. K.; Patton, A.; Towlson, E.; Davidsen, J.

2026-03-25 neuroscience 10.64898/2026.03.23.713580 medRxiv
Top 0.2%
1.9%

A central question in neuroscience is how neural processing generates or encodes behavior. Caenorhabditis elegans is well suited to addressing this question, given its compact nervous system and near-complete structural connectome. Despite this, findings from previous studies remain inconclusive. While some have shown that the connectome can robustly encode specific behaviors such as locomotion, others report that functional connectivity can be reconfigured across behaviors. We aim to understand the relationship between structural connectivity, functional connectivity and biological behavior in silico by using an experimentally motivated computational model leveraging the structural connectome. Stimulation of specific neurons in the model induces oscillatory neural responses, enabling us to infer neuronal functional connectivity. Functional connectivity is found to be stronger among some neurons, allowing us to identify functional communities. We find that electrical synapses play a critical role in determining functional communities, and the resulting mesoscale functional architecture is predominantly gap junctionally assortative. Furthermore, comparison with behavioral circuits shows that locomotion circuits are largely segregated into distinct functional communities while other circuits are more distributed across multiple functional communities. We also observe that stimulation of neurons belonging to these distributed circuits elicits a more synchronized neuronal response compared to stimulation of neurons within the more segregated circuits. This is consistent with the presence of behavioral patterns that originate in one circuit and terminate in another (e.g., chemosensation leading to locomotion), such that stimulation of one circuit can activate the other and eventually result in a synchronized response. 
We also find a large repertoire of chimera-like synchronization patterns upon stimulation of certain behavioral circuits (chemosensation, mechanosensation) indicating high dynamical flexibility. Overall, our results demonstrate that while certain behaviors are governed by functionally segregated circuits, others emerge from the synchronization of multiple functional communities, which are, to begin with, influenced by the underlying structural connectivity.

Author summary: Animals constantly transform sensory inputs into actions, but it is still unclear how this mapping from neural activity to behavior is implemented in a real nervous system. Caenorhabditis elegans offers a unique testbed for this question because its entire wiring diagram is nearly completely mapped. Yet, previous works have reached mixed conclusions about how well this anatomical circuit diagram predicts actual patterns of activity and behavior. Here, we use a biologically inspired computational model of the C. elegans nervous system to bridge this gap between structure, function, and behavior. By virtually stimulating individual neurons and observing the resulting network-wide oscillations, we infer how strongly different pairs and groups of neurons interact in functional terms. We then use network analysis tools to identify groups of neurons that tend to co-activate, and relate these functional communities to known behavioral circuits for locomotion and sensory processing. We find that gap junctions play a key role in shaping functional communities, and that locomotion-related neurons are more functionally segregated than neurons involved in other behaviors, which are more functionally distributed. Our results suggest that some behaviors rely on specialized, functionally isolated circuits, whereas others emerge from the coordinated activity of multiple functional communities.
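The structure-to-function pipeline this summary describes can be miniaturized. The toy below (all values invented; Kuramoto phase oscillators stand in for the paper's neuron model) shows how functional communities can be read out from simulated dynamics on a structural graph: two densely wired groups joined by one weak bridge yield a correlation matrix whose blocks recover the structural groups.

```python
import numpy as np

# Toy structure-to-function pipeline: Kuramoto oscillators on a graph
# with two dense groups and one weak bridge (all values invented).

rng = np.random.default_rng(4)

n = 10
A = np.zeros((n, n))
A[:5, :5] = 1.0                      # group 1: densely wired
A[5:, 5:] = 1.0                      # group 2: densely wired
np.fill_diagonal(A, 0.0)
A[4, 5] = A[5, 4] = 0.1              # weak structural bridge

omega = rng.normal(1.0, 0.05, n)
omega[5:] += 0.5                     # groups prefer different frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, n)

dt, K, steps = 0.01, 0.8, 20000
x = np.empty((steps, n))
for t in range(steps):
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + (K / n) * coupling)
    x[t] = np.sin(theta)

# functional connectivity = signal correlations (transient discarded)
C = np.corrcoef(x[steps // 2 :].T)
within = (C[:5, :5].sum() - 5.0) / 20.0   # mean off-diagonal, group 1
between = C[:5, 5:].mean()
# within-group functional coupling dominates between-group coupling,
# so the functional communities recover the structural groups
```

The paper's gap-junction result corresponds to strengthening the bridge: raising its weight pulls the two communities into one synchronized cluster.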

10
Simulation of neurotransmitter release and its imaging by fluorescent sensors

Gretz, J.; Mohr, J. M.; Hill, B. F.; Andreeva, V.; Erpenbeck, L.; Kruss, S.

2026-03-25 neuroscience 10.64898/2026.03.23.707923 medRxiv
Top 0.2%
1.8%

Cells release signaling molecules such as neurotransmitters that diffuse through the extracellular space and bind to receptors. These signaling molecules can be detected by fluorescent sensors/probes to provide images of the signaling process. Such images are not equivalent to a concentration because diffusion and sensor kinetics affect (convolute) them. Therefore, computational approaches are necessary to disentangle these contributions and allow interpretation of fluorescent sensor-based images. Here, we present a kinetic Monte Carlo framework (FLuorescence Imaging Kinetic Simulation, FLIKS) that simulates signaling molecules undergoing cellular release, stochastic diffusion and reversible binding to sensors in realistic cellular (2D or 3D) geometries. We apply it to model neurotransmitter (dopamine) release in synaptic clefts and for paracrine signaling by immune cells. We also show how sensor location, sensor kinetics and release location affect fluorescence images. For example, we show how sensor sensitivity depends on the distance from the synaptic cleft and changes when dopamine transporters (DAT) clear dopamine. The approach also allows comparison of the performance of membrane-bound (genetically encoded) sensors versus artificial sensors such as nanosensors placed outside, under, or around the cells. As an example, we also demonstrate how the images of catecholamine release by immune cells can be modeled and compared to experimental data to better understand the release pattern. This framework provides a quantitative basis for analyzing and interpreting fluorescent sensor imaging data.
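The simulation logic can be caricatured in a few lines. This is a toy kinetic Monte Carlo sketch in the spirit of FLIKS, with invented geometry and rates rather than the published framework: released particles diffuse in 2D and bind reversibly inside a sensor patch, and the bound count plays the role of the fluorescence signal.

```python
import numpy as np

# Toy kinetic Monte Carlo in the spirit of FLIKS: particles released at
# the origin diffuse in 2D and bind reversibly inside a sensor patch.
# Geometry and rates are invented for illustration.

rng = np.random.default_rng(1)

def simulate(n=2000, steps=400, D=1.0, dt=0.01,
             patch_center=(2.0, 0.0), patch_r=1.0, p_on=0.5, p_off=0.02):
    pos = np.zeros((n, 2))               # release site: the origin
    bound = np.zeros(n, dtype=bool)
    sigma = np.sqrt(2.0 * D * dt)        # per-axis diffusion step size
    trace = []
    for _ in range(steps):
        free = ~bound
        pos[free] += rng.normal(0.0, sigma, size=(int(free.sum()), 2))
        inside = np.linalg.norm(pos - patch_center, axis=1) < patch_r
        bound |= free & inside & (rng.random(n) < p_on)    # binding
        bound &= ~(rng.random(n) < p_off)                  # unbinding
        trace.append(int(bound.sum()))
    return np.array(trace)

trace = simulate()
# "fluorescence" rises as transmitter reaches the patch and binds;
# diffusion plus the on/off kinetics convolute the concentration signal
```

Sweeping `patch_center` or the on/off rates reproduces, in miniature, the paper's point that sensor placement and kinetics shape the image as much as the release itself.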

11
Coupled beta and high-frequency oscillations emerge from synchronized bursting in a minimal model of the parkinsonian subthalamic nucleus

Sheheitli, H.; Johnson, L. A.; Wang, J.; Aman, J. E.; Vitek, J. L.

2026-04-01 neuroscience 10.64898/2026.03.30.715339 medRxiv
Top 0.3%
1.0%

Local field potentials recorded from the subthalamic nucleus (STN) in Parkinson's disease (PD) exhibit a distinctive multiscale spectral signature: exaggerated beta-band oscillations (13-30 Hz) coupled to high-frequency oscillations (HFOs, 200-400 Hz), with HFO amplitude being phase-locked to the beta cycle. This phase-amplitude coupling (PAC) has been identified as a promising biomarker of the parkinsonian state, yet no biophysical model has explained how it emerges, what determines the HFO frequency, or how HFOs can exist without beta modulation in the medicated STN. Here we show that a heterogeneous population of excitatory Izhikevich neurons with recurrent coupling produces three dynamical regimes: (i) asynchronous tonic firing, (ii) asynchronous bursting, in which neurons burst individually producing broadband HFO power but without coherent population-level PAC, and (iii) synchronous bursting, which gives rise to beta-HFO PAC. The regimes are governed by two biophysically interpretable parameters that capture complementary effects of dopamine depletion: one reflecting changes in intrinsic neuronal excitability, the other reflecting changes in synaptic coupling strength. The transition from asynchronous to synchronous bursting in this model captures the emergence of pathological STN neuronal activity in the parkinsonian state. HFO peak frequency varies continuously across the two-parameter landscape, providing a mechanistic account of the clinically observed shift from slow (200-300 Hz) to fast (300-400 Hz) HFOs between medication states. The character of the synchronization transition depends on baseline excitability, ranging from a sharp co-emergence of bursting and synchrony at low excitability to a decoupled two-stage process at intermediate excitability where burst recruitment precedes synchronization.
The model generates testable predictions for future clinical and experimental studies, provides a numerical dissection of how mesoscopic LFP features map onto microscopic neuronal dynamics, and serves as a computational building block for future circuit-level models that can guide brain stimulation strategies tailored to the patient-specific dynamical state of the STN.

Author summary: In Parkinson's disease, local field potentials (LFPs) from the subthalamic nucleus (STN) contain two prominent rhythms: a slow beta rhythm (13-30 Hz) and fast oscillations (200-400 Hz). In the parkinsonian state, these rhythms become coupled, with fast oscillation amplitude varying systematically with beta phase, a relationship absent in the medicated state. We built a biophysical spiking neuron network model that captures two key effects of dopamine depletion on STN neuronal activity: changes in the intrinsic neuronal excitability and changes in synaptic coupling strength. The model produces fast oscillations from rapid intraburst firing, while the slow beta rhythm and its coupling to fast oscillations emerge with the onset of synchronized bursting across the population. Importantly, the frequency of the fast oscillations shifts continuously depending on both parameters, explaining a puzzling clinical observation that these oscillations change frequency between medication states. The model also reproduces the modulation pattern in the spike-triggered average of HFO envelope amplitude reported in patient recordings, confirming consistency with single-unit observations as well as LFP-level spectral features. By mapping how multi-timescale LFP spectral features relate to the dynamical regime of the underlying neuronal population, this work offers a framework for brain stimulation strategies informed by patient-specific dynamical states.
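The network's building block is the standard Izhikevich neuron, whose firing modes are easy to reproduce in isolation. The sketch below uses Izhikevich's canonical published parameter sets for regular spiking and chattering (bursting), not the paper's fitted values, to show the two single-cell modes from which regimes (i)-(iii) are built.

```python
import numpy as np

# The network's building block in isolation: Izhikevich's canonical
# "regular spiking" and "chattering" (bursting) parameter sets -- not
# the paper's fitted values -- under constant drive.

def izhikevich(a, b, c, d, I=10.0, dt=0.1, t_max=1000.0):
    v, u = -65.0, b * -65.0
    spikes = []
    for i in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # spike cutoff: record and reset
            spikes.append(i * dt)
            v = c
            u += d
    return np.array(spikes)

tonic = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0)   # regular spiking
burst = izhikevich(a=0.02, b=0.2, c=-50.0, d=2.0)   # chattering/bursting
# bursting shows clusters of short intraburst ISIs separated by long
# pauses; in the paper, synchronized bursting across a population is
# what yields the beta-HFO phase-amplitude coupling
```

Rapid intraburst firing sets the HFO timescale, and the slow burst envelope sets the beta timescale, which is why the burst/tonic distinction matters at the population level.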

12
Automated derivation of mean field models from spiking neural networks for the simulation of brain dynamics

Lorenzi, R. M.; De Grazia, M.; Gandini Wheeler-Kingshott, C. A. M.; Palesi, F.; D'Angelo, E. U.; Casellato, C.

2026-03-20 neuroscience 10.64898/2026.03.18.712631 medRxiv
Top 0.4%
0.7%

A mean field model (MFM) is a mesoscopic description of neuronal population dynamics that can reduce the complexity of neural microcircuits into equations preserving key functional properties. The generation of a MFM is a complex mathematical process that starts with the incorporation of single neuron input/output relationships and local connectivity. Once neuron electroresponsiveness and synaptic properties are defined, in principle, the process can be automated. Here we develop a tool for automatic MFM derivation from biophysically grounded spiking networks (Auto-MFM) by performing micro-to-mesoscale parameter remapping, estimating input/output relationships specific for different neuronal populations (i.e., transfer functions), and optimizing transfer function parameters. Auto-MFM was tested using a spiking cerebellar circuit as a generative model. The cerebellar MFM derived with Auto-MFM accurately reproduced cerebellar population dynamics of the corresponding spiking network, matching mean and time-varying firing rates across a wide range of stimulation patterns. Auto-MFM allowed us to model and explore physiological and pathological circuit variants; indeed, it was used to map ataxia-related structural connectivity alterations of the cerebellar network, in which Purkinje cells with simplified dendritic structure altered the cerebellar connectivity. Furthermore, Auto-MFM was used to create a library of cerebellar MFMs by sweeping the level of the excitatory conductance at the mossy fiber - granule cell synapse, which is altered in several neuropathologies. Auto-MFM thus proves to be a flexible and powerful tool to generate region-specific MFMs of healthy and pathological brain networks to be embedded in brain digital models.

13
Explaining temporally clustered errors with an autocorrelated Drift Diffusion Model

Vloeberghs, R.; Tuerlinckx, F.; Urai, A. E.; Desender, K.

2026-03-23 neuroscience 10.64898/2026.03.20.713186 medRxiv
Top 0.4%
0.7%

A widely used framework for studying the computational mechanisms of decision making is the Drift Diffusion Model (DDM). To account for the presence of both fast and slow errors in empirical data, the DDM incorporates across-trial variability in parameters such as the drift rate and the starting point. Although these variability parameters enable the model to reproduce both fast and slow errors, they rely on the assumption that over trials each parameter is independently sampled. As a result, the DDM effectively predicts that errors--whether fast or slow--occur randomly over time. However, in empirical data this assumption is violated, as error responses are often temporally clustered. To address this limitation, we introduce the autocorrelated DDM, in which trial-to-trial fluctuations in drift rate, starting point, and boundary evolve according to first-order autoregressive (AR1) processes. Using simulations, we demonstrate that, unlike the across-trial variability DDM, the autocorrelated DDM naturally accounts for temporal clustering of errors. We further show that model parameters can be reliably recovered using Amortized Bayesian Inference, even with as few as 500 trials. Finally, fits to empirical data indicate that the autocorrelated DDM provides the best account of error clustering, highlighting that computational parameters fluctuate over time, despite typically being estimated as fixed across trials.
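The core idea, errors clustering in time because drift fluctuates as an AR(1) process, can be demonstrated with a toy simulation (invented parameters, plain Euler accumulation rather than the authors' inference pipeline): stretches of low drift produce runs of errors, visible as a positive lag-1 autocorrelation of the error sequence.

```python
import numpy as np

# Toy demonstration of temporally clustered errors under an AR(1)
# drift rate (all parameters invented for illustration).

rng = np.random.default_rng(2)

def simulate(n_trials=2000, v_mean=1.0, phi=0.95, sd=0.4,
             a=1.0, dt=0.01, noise=1.0):
    v = v_mean
    correct = np.empty(n_trials, dtype=bool)
    for trial in range(n_trials):
        # AR(1) update of this trial's drift rate
        v = v_mean + phi * (v - v_mean) + rng.normal(0.0, sd)
        x = 0.0
        while abs(x) < a:                # diffuse to +a (correct) or -a (error)
            x += v * dt + noise * np.sqrt(dt) * rng.normal()
        correct[trial] = x >= a
    return correct

errors = (~simulate()).astype(float)
r1 = np.corrcoef(errors[:-1], errors[1:])[0, 1]   # lag-1 autocorrelation
# r1 > 0: errors cluster in time, unlike the independent-sampling DDM,
# whose error sequence would have r1 near zero
```

Setting `phi=0.0` recovers the classical independently-sampled drift variability and flattens the autocorrelation, which is exactly the contrast the paper tests on empirical data.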

14
Distributed elasticity: a high-reward, moderate-risk strategy for efficient control modulation in insect flight

Wang, L.; Zhang, C.; Asadimoghaddam, N.; Pons, A.

2026-03-25 systems biology 10.64898/2026.03.23.713675 medRxiv
Top 0.5%
0.5%

The environments inhabited by flying insects demand a balance between flight efficiency and flight manoeuvrability. In structural oscillators such as the insect indirect flight motor, efficiency (arising from resonance) and manoeuvrability (arising from kinematic modulation) typically trade off against each other, with modulation incurring penalties to efficiency. Band-type resonance is a phenomenon that offers, in theory, a strategy to lessen these penalties via careful navigation through a band of efficient kinematic states. However, identifying this band is challenging: no methods exist to identify the complete band in realistic motor models involving elasticity distributed across the thorax and wing, nor are the effects of elasticity distribution on the band known. In this work, we address both open topics. We present a suite of numerical methods for identifying the complete resonance band in general systems. Applying them to models of the insect flight motor with distributed elasticity--thoracic and wing flexion--reveals that distributed elasticity is a moderate-risk but high-reward morphological feature. Well-tuned distributions expand the resonance band more than fourfold, whereas poorly tuned distributions completely extinguish it. These results indicate that distributing elasticity across the insect flight motor can have adaptive value, and they motivate broader work identifying such distributions across species.
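As a toy illustration of a resonance band (not the paper's distributed-elasticity models), consider a single mass-spring-damper "wing": the stroke amplitude obtained per unit drive-force amplitude peaks at resonance, and the frequencies achieving at least half the peak gain form an efficient band. All parameter values below are arbitrary assumptions.

```python
import numpy as np

def gain(w, m=1.0, k=100.0, c=1.0):
    """Stroke amplitude per unit drive-force amplitude for a driven
    mass-spring-damper: |x|/|F| = 1 / sqrt((k - m w^2)^2 + (c w)^2)."""
    return 1.0 / np.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)

w = np.linspace(1.0, 30.0, 2901)          # drive frequency grid (rad/s)
g = gain(w)
w_res = w[np.argmax(g)]                   # resonant frequency, near sqrt(k/m) = 10
band = w[g >= 0.5 * g.max()]              # frequencies within the "efficient band"
bandwidth = band.max() - band.min()
```

In this caricature the damping coefficient `c` sets the bandwidth; the paper's question is the far harder analogue for nonlinear motors with elasticity split between thorax and wing.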

15
Short-term synaptic plasticity at neuron-OPC synapses in the corpus callosum during postnatal development of mice: experimental and computational study

Kula, B.; Chen, T.-J.; Nagy, B.; Hovhannisyan, A.; Terman, D.; Sun, W.; Kukley, M.

2026-04-03 neuroscience 10.64898/2026.03.31.715637 medRxiv
Top 0.6%
0.4%

Glutamatergic neuronal synapses in the mouse neocortex mature during the first two months after birth. A key event during synaptic maturation is a change in short-term synaptic plasticity (STP), i.e. a switch from strong synaptic depression to weaker depression or even facilitation. Glutamatergic pyramidal neurons located in cortical layers II/III, V, and VI project axons through the corpus callosum, where they release glutamate along their shafts and form glutamatergic synapses with oligodendrocyte precursor cells (OPCs). Here, we used single-cell electrophysiological recordings in brain slices to investigate synaptic plasticity at neuron-OPC synapses along axonal shafts in the white matter, and applied computational approaches to pinpoint the mechanisms of this plasticity. We found that during postnatal development of mice, there is a switch from short-term synaptic depression to short-term synaptic facilitation at glutamatergic neuron-OPC synapses in the corpus callosum. The synaptic delay of the phasic neuron-OPC excitatory postsynaptic current shortens, and the amount of asynchronous release at neuron-OPC synapses decreases as animals mature, indicating that glutamate release becomes more synchronized. Our computational modelling suggests that both pre- and postsynaptic changes may contribute to the functional development of, and changes in plasticity at, neuron-OPC synapses in the white matter. Taken together, our findings indicate that synaptic release machineries located at different sites along the same axon (i.e. axonal shaft in the white matter vs synaptic boutons in the grey matter) mature in a very similar fashion, that STP occurs at both synaptic sites, and that STP dynamics represent an important event during brain maturation.
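A standard way to capture such a depression-to-facilitation switch is the Tsodyks-Markram short-term plasticity model, in which each spike both depletes synaptic resources and transiently boosts release probability. The sketch below uses illustrative parameter values, not values fitted to neuron-OPC synapses.

```python
import numpy as np

def tm_train(U, tau_rec, tau_fac, isi=0.05, n_spikes=5):
    """Relative PSC amplitudes over a spike train (Tsodyks-Markram model).
    U: baseline release probability; tau_rec: resource recovery time constant (s);
    tau_fac: facilitation decay time constant (s); isi: inter-spike interval (s)."""
    R, u = 1.0, 0.0                          # R: available resources, u: utilization
    amps = []
    for k in range(n_spikes):
        if k > 0:                            # recovery and facilitation decay between spikes
            R = 1.0 - (1.0 - R) * np.exp(-isi / tau_rec)
            u = u * np.exp(-isi / tau_fac)
        u = u + U * (1.0 - u)                # facilitation increment at each spike
        amps.append(u * R)                   # PSC amplitude proportional to u * R
        R = R * (1.0 - u)                    # resource depletion by release
    return np.array(amps) / amps[0]          # normalize to the first response

depressing = tm_train(U=0.7, tau_rec=0.5, tau_fac=0.02)    # amplitudes decline
facilitating = tm_train(U=0.1, tau_rec=0.05, tau_fac=0.5)  # amplitudes grow
```

Immature synapses behave like the high-`U` case (depression); a developmental drop in release probability with slower facilitation decay moves the synapse toward the low-`U`, facilitating regime.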

16
Modeling the Influence of Bandwidth and Envelope on Categorical Loudness Scaling

Neely, S. T.; Harris, S. E.; Hajicek, J. J.; Petersen, E. A.; Shen, Y.

2026-04-01 neuroscience 10.64898/2026.03.30.715393 medRxiv
Top 0.6%
0.4%

In a loudness-matching paradigm, a reduction in the loudness of sounds with bandwidths less than one-half octave compared to a tone of equal sound pressure level has been observed previously for five-tone complexes at 60 dB SPL centered at 1 kHz. Here, this loudness-reduction phenomenon is explored using band-limited noise across wide ranges of frequency and level. Additionally, these measurements are simulated by a model of loudness judgement based on neural ensemble averaging (NEA), which serves as a proxy for central auditory signal processing. Multi-frequency equal-loudness contours (ELC) were measured for each of the adult participants (N=100) with pure-tone average (PTA) thresholds that ranged from normal to moderate hearing loss using a categorical-loudness-scaling (CLS) paradigm. Presentation level and center frequency of the test stimuli were determined on each trial according to a Bayesian adaptive algorithm, which enabled multi-frequency ELC estimation within about five minutes of testing. Three separate test conditions differed by stimulus type: (1) pure-tone, (2) quarter-octave noise and (3) octave noise. For comparison, loudness judgements for all three stimulus types were also simulated by the NEA model, which comprised a nonlinear, active, time-domain cochlear model with an appended stage of neural spike generation. Mid-bandwidth loudness reduction was observed to be greatest at moderate stimulus levels and frequencies near 1 kHz. This feature was approximated by the NEA model, which suggests involvement of an early stage of the central auditory system in the formation of loudness judgements.

17
Sparse Stimulus Generation Improves Reverse Correlation Efficiency and Interpretability

Gargano, J. A.; Rice, A.; Chari, D. A.; Parrell, B.; Lammert, A. C.

2026-03-26 neuroscience 10.64898/2026.03.24.714012 medRxiv
Top 0.7%
0.3%

Reverse correlation is a widely used and well-established method for probing latent perceptual representations, in which subjects render subjective preference responses to ambiguous stimuli. Stimuli are purposefully designed to have no direct relationship with the target representation (e.g., they are randomly generated), a property which makes each individual stimulus minimally informative toward reconstructing the target, and often difficult for subjects to interpret. As a result, a large number of stimulus-response pairs must be gathered from a given subject for reconstructions to be of sufficient quality, making the task fatiguing. Recent work has demonstrated that the number of trials needed can be substantially reduced using a compressive sensing framework, which incorporates into the reconstruction process the assumption that the target representation can be sparsely represented in some basis. Here, we introduce an alternative method that incorporates the sparsity assumption directly into stimulus generation, which holds promise not only for improving efficiency but also for improving the interpretability of stimuli from the subjects' perspective. We develop this new method as a mathematical variation of the compressive sensing approach, before conducting one simulation study and two human-subjects experiments to assess the benefits of this method for reconstruction quality, sample-size efficiency, and subjective interpretability. Results show that sparse stimulus generation improves all three of these areas relative to conventional reverse correlation approaches, and also relative to compressive sensing in most conditions.
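The core idea can be illustrated with a toy reverse-correlation loop: build each stimulus from only a few basis functions (here a DCT basis), collect preference responses from a simulated subject, and reconstruct via a classification-image average in that basis. The basis choice, response model, and reconstruction rule are simplifying assumptions, not the paper's actual procedures.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 64, 5, 400          # signal length, active components per stimulus, trials

# Orthonormal DCT-II basis (rows are basis functions)
t = np.arange(n)
B = np.cos(np.pi * (t[None, :] + 0.5) * t[:, None] / n) * np.sqrt(2.0 / n)
B[0] *= np.sqrt(0.5)

# Latent target representation: sparse in the DCT basis
target_coef = np.zeros(n)
target_coef[[2, 7, 13]] = [1.0, -0.8, 0.6]
target = B.T @ target_coef

# Sparse stimulus generation: each stimulus activates only k basis functions
coefs = np.zeros((m, n))
for i in range(m):
    idx = rng.choice(n, size=k, replace=False)
    coefs[i, idx] = rng.normal(0.0, 1.0, k)
stimuli = coefs @ B            # back to the signal domain

# Simulated subject: noisy preference response to each stimulus
responses = np.sign(stimuli @ target + rng.normal(0.0, 0.5, m))

# Classification-image-style reconstruction in the sparse basis
recon = B.T @ (responses @ coefs / m)
corr = np.corrcoef(recon, target)[0, 1]
```

Because each stimulus mixes only `k` components, its energy is concentrated where the (assumed sparse) target lives, which is the intuition behind both the efficiency and the interpretability gains.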

18
Branch-specific plasticity explains distal enrichment of retinotopically displaced inputs in visual cortex

Landau, A. T.; Sabatini, B. L.; Clopath, C.

2026-04-03 neuroscience 10.64898/2026.04.01.715858 medRxiv
Top 0.7%
0.3%

Neurons distribute synaptic inputs across their dendritic tree. In layer 2/3 pyramidal cells of primary visual cortex, spines on distal dendrites share somatic orientation preference but have receptive fields displaced in retinotopic space, which supports tuning to visual edges. However, it is not known how synaptic plasticity rules can lead to specialization of tuning properties across dendritic compartments. We demonstrate an experimentally grounded model of compartment-specific spike-timing dependent plasticity (STDP) that accounts for the enrichment of retinotopically-displaced inputs on distal branches. Our previous experimental work revealed compartment-specific calcium signals that predict reduced STDP-mediated depression but preserved potentiation. Based on these findings, we built an STDP model with compartment-specific properties, in which some distal branches are relatively resistant to STDP-mediated depression. Synapses on these branches are more likely to stabilize inputs with weaker correlations to postsynaptic spiking. Using a visual input model, we show that compartment-specific reduction in STDP-mediated depression recapitulates in vivo experimental measurements of spine tuning. Furthermore, our experimental results show that reduced STDP-mediated depression is restricted to distal dendritic compartments with complex branching structure and is not observed in other distal branches. Therefore, our model makes an untested prediction that complex branches will be hotspots for retinotopically-displaced inputs.
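The branch-specific rule can be caricatured as a standard pair-based STDP kernel whose depression amplitude is scaled down on distal complex branches; weakly correlated inputs (roughly uniform pre-post timing differences) then experience net depression proximally but net stabilization distally. All parameters below are illustrative, not the paper's fitted values.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0, depression_scale=1.0):
    """Pair-based STDP weight change for pre-to-post timing dt_ms (> 0: pre leads).
    depression_scale < 1 models distal branches that are relatively resistant
    to STDP-mediated depression while potentiation is preserved."""
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms > 0,
                    a_plus * np.exp(-dt_ms / tau),                      # pre before post: LTP
                    -depression_scale * a_minus * np.exp(dt_ms / tau))  # post before pre: LTD

# Expected weight drift for a weakly correlated input:
# timing differences roughly uniform over (-50, 50) ms
rng = np.random.default_rng(0)
dts = rng.uniform(-50.0, 50.0, 100_000)
drift_proximal = stdp_dw(dts, depression_scale=1.0).mean()   # net depression
drift_distal = stdp_dw(dts, depression_scale=0.4).mean()     # net stabilization
```

With the usual LTD-dominated kernel (`a_minus > a_plus`) the proximal drift is negative, while scaling LTD down flips the sign, so the same weakly correlated input survives only on depression-resistant branches.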

19
Real-Time Embodied Experience Shapes High-Level Reasoning Under Altered Gravity

Grandchamp des Raux, H.; Ghilardi, T.; Ferre, E. R.; Ossmy, O.

2026-03-20 neuroscience 10.64898/2026.03.16.712090 medRxiv
Top 0.7%
0.3%

A critical aspect of human cognition is the ability to use our knowledge about the laws of physics to make predictions about physical events. Whether this ability is based on abstract processes or is grounded in our body-environment interactions remains an open debate. We used physical reasoning under altered gravity as a model system to show that humans' real-time embodied experience modifies their high-level physical reasoning. Specifically, we tested participants in computerised reasoning games while disrupting their gravitational signalling using Galvanic Vestibular Stimulation (GVS). Participants failed more often and adopted suboptimal strategies under the GVS condition, compared to no-GVS, in games requiring reasoning about terrestrial gravity. However, the effects of GVS were reduced when the games included reasoning about altered gravity. Our findings demonstrate how the physical experience of the body shapes high-level cognitive skills such as reasoning, suggesting that humans' mental representation of the world is grounded in adaptable physical mechanisms.

20
Passive neuromodulation: an energy-driven mechanism for closed-loop suppression of epileptic seizure

Acharya, G.; Huang, A.; Santhakumar, V.; Nozari, E.

2026-03-30 neuroscience 10.64898/2026.03.26.714592 medRxiv
Top 0.8%
0.2%

For decades, electrical neuromodulation has been used as a therapeutic mechanism to disrupt and desynchronize pathological neural activity in various neurological disorders. Despite notable progress, however, patient outcomes remain highly variable, particularly in medically intractable epilepsy, where surgery still provides the greatest chance of seizure freedom. Here we propose passive neuromodulation (PNM) as a radical alternative to conventional neurostimulation, whereby analogue feedback is used to drain energy from an epileptic circuit and thus suppress the initiation or spread of electrographic seizures. We provide pilot evidence of the efficacy and robustness of PNM using two computational models of epileptic dynamics: a detailed biophysical network model of the dentate gyrus, and the Epileptor neural mass model of seizure dynamics. Despite the vast differences between these models, our results show the robust ability of PNM to suppress seizures in both. We further demonstrate the efficacy and robustness of responsive PNM, whereby brief (50 ms) windows of PNM are triggered by a simultaneously running seizure detection algorithm, as well as the safe and tunable nature of PNM, where more robust seizure suppression can be achieved by parametrically titrating the amount of power drained from the tissue, without inducing any seizures even if applied interictally. Overall, our results provide strong evidence of the promise of PNM for the closed-loop control of epileptic seizures and other neurological disorders where damping pathological network activity can restore healthy dynamics.
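The energy-draining idea can be illustrated on a toy limit-cycle oscillator standing in for a seizing circuit: a van der Pol system sustains large oscillations on its own, but a dissipative velocity feedback u = -k*v extracts energy until the activity collapses. This caricature is not the paper's dentate gyrus or Epileptor model, and the gains are arbitrary.

```python
import numpy as np

def simulate(k_drain=0.0, mu=2.0, T=100.0, dt=0.001):
    """Van der Pol oscillator with dissipative feedback -k_drain * v (Euler).
    Returns the peak |x| over the second half of the run (post-transient amplitude)."""
    x, v = 2.0, 0.0
    amp = 0.0
    n = int(T / dt)
    for i in range(n):
        a = mu * (1.0 - x**2) * v - x - k_drain * v   # feedback drains energy
        x += v * dt
        v += a * dt
        if i > n // 2:
            amp = max(amp, abs(x))
    return amp

amp_free = simulate(k_drain=0.0)   # sustained "pathological" oscillation
amp_pnm = simulate(k_drain=3.0)    # energy drained: oscillation suppressed
```

With `k_drain = 3` the effective damping `k_drain - mu*(1 - x**2)` is positive everywhere, so the oscillation decays to rest; titrating `k_drain` tunes how aggressively power is extracted, mirroring the tunability described above.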